Past Event: Center for Autonomy Seminar
Leveraging Human Input to Enable Robust, Interactive, and Aligned AI Systems
Daniel Brown, University of Utah
11 AM – 12 PM
Tuesday, Oct 7, 2025
POB 6.304
Abstract
Ensuring that AI systems do what we, as humans, actually want them to do is one of the biggest open research challenges in AI alignment and safety. I lead the Aligned, Robust, and Interactive Autonomy (ARIA) Lab at the University of Utah, where we seek to directly address this challenge by enabling AI systems to interact with humans to learn aligned and robust behaviors. I will discuss some of our research on inferring human intent, understanding the robustness and alignment of AI systems, and developing AI systems that use uncertainty estimation to know when they should actively query humans for help.
Biography
Daniel Brown is an assistant professor in the Kahlert School of Computing and the Robotics Center at the University of Utah. He received an AAAI New Faculty Highlight Award in 2025 and an NIH Trailblazer Award in 2024, and was named a Robotics: Science and Systems (RSS) Pioneer in 2021. Daniel’s research focuses on human-AI alignment, human-robot interaction, and robot learning. His goal is to develop robots and other AI systems that can safely and efficiently interact with, learn from, teach, and empower human users. His research spans reward and preference learning, human-in-the-loop machine learning, and AI safety, with applications in assistive and medical robotics, personal AI assistants, and autonomous driving. He completed his postdoc at UC Berkeley in 2023 and received his Ph.D. in Computer Science from UT Austin in 2020.